| settings["benchmark"], deployment_client, self.config | ||
| ) | ||
| self._function = deployment_client.get_function(self._benchmark) | ||
| self._functions = deployment_client.get_function(self._benchmark, 3) |
The allocation of multiple function instances should happen in the deployment.
For example, AWS/GCP/Azure will create instances for us automatically. In Local, you need to allocate them as requested by spawning more Docker containers hosting the function.
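A minimal sketch of what "spawning more Docker containers hosting the function" could look like in the Local deployment. The helper names (`instance_ports`, `allocate_instances`), the image's internal port `8080`, and the base host port are all assumptions for illustration, not the actual SeBS code:

```python
def instance_ports(base_port: int, num: int) -> list:
    """Host ports assigned to each instance, one container per port."""
    return [base_port + i for i in range(num)]


def allocate_instances(image: str, num: int, base_port: int = 9000) -> list:
    """Spawn `num` detached containers for one function, returning
    (host_port, container) pairs. Uses docker-py, imported lazily so the
    sketch stays importable without a Docker daemon."""
    import docker

    client = docker.from_env()
    instances = []
    for port in instance_ports(base_port, num):
        # Map the function's HTTP port inside the container to a unique host port.
        container = client.containers.run(
            image, detach=True, ports={"8080/tcp": port}
        )
        instances.append((port, container))
    return instances
```

This keeps the instance count a runtime argument rather than a hardcoded constant.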
Furthermore, the number 3 should not be hardcoded anywhere.
Sorry, I had done this for testing purposes but forgot to remove it before pushing. I will remove it.
| self._benchmark_input = self._benchmark.prepare_input( | ||
| storage=self._storage, size=settings["input-size"] | ||
| ) | ||
| for i in range(len(self._functions)): |
As mentioned above - we should have one function instance, and the Local instance and its triggers will allocate many Docker containers.
I imagine this seems more complex than anticipated since cloud platforms expose an HTTP trigger, and the local deployment would have a different HTTP address for each instance. We don't want to have to change every experiment - we want to hide this complexity behind the trigger.
The simplest solution would be to implement a new sebs.Local.ScalableHTTPTrigger (or something similar) that would allocate/deallocate container instances, and redirect invocations to the proper HTTPTrigger.
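To illustrate the suggestion, here is a hypothetical sketch of such a trigger. The class name comes from the review comment above; the round-robin endpoint selection and the `invoke` signature are assumptions, and a real implementation would delegate each call to the matching per-instance HTTPTrigger:

```python
import itertools


class ScalableHTTPTrigger:
    """Hides multiple local container endpoints behind a single trigger,
    so experiments keep calling one invoke() regardless of instance count."""

    def __init__(self, endpoints):
        self._endpoints = list(endpoints)
        # Round-robin iterator over the allocated container addresses.
        self._cycle = itertools.cycle(self._endpoints)

    def next_endpoint(self) -> str:
        """Pick the next container address in round-robin order."""
        return next(self._cycle)

    def invoke(self, payload: dict):
        url = self.next_endpoint()
        # A real implementation would forward to the per-instance
        # HTTPTrigger here, e.g. requests.post(url, json=payload).
        return url, payload
```

Experiments then see one trigger object, and scaling containers up or down only changes the endpoint list inside it.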
| @abstractmethod | ||
| def create_function(self, code_package: Benchmark, func_name: str) -> Function: | ||
| def create_function(self, code_package: Benchmark, func_name: str, num: int) -> Function: |
We shouldn't change this API
I had added the num parameter so the deployment knows how many containers to dispatch.
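One way to reconcile both comments is to keep `create_function(code_package, func_name)` unchanged and move the instance count into the deployment's configuration. This is a hypothetical sketch; `LocalConfig` and `num_instances` are invented names, not existing SeBS config keys:

```python
from dataclasses import dataclass


@dataclass
class LocalConfig:
    """Hypothetical deployment config carrying the instance count,
    so the abstract create_function API stays unchanged."""
    num_instances: int = 1


def create_function(code_package, func_name: str, config: LocalConfig):
    # The Local deployment reads the count from its own config internally
    # instead of exposing a `num` parameter in the shared abstract API.
    return func_name, config.num_instances
```

Cloud backends would simply ignore the field, since AWS/GCP/Azure allocate instances automatically.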
| ) | ||
| except docker.errors.NotFound: | ||
| raise RuntimeError(f"Cached container {instance_id} not available anymore!") | ||
| # clear cache |
@mcopik This is the PR related to issue #119. The perf-cost experiment gives the following output now:
It continues to loop like this.